Yeah, okay, I can hear things.
Okay, the last time I did this, there was an actual person who did all the things, like
actually sitting in the back with a camera and stuff.
Anyway, okay, great.
This seems to work.
So hello, everyone.
You might have noticed that I'm not Professor Kohlhase.
By the way, can we maybe turn this down a bit?
I'm not sure that did anything, but okay.
Yeah, I'm obviously not Professor Kohlhase, because he happens to not be here this week.
So you'll have to make do with me for today and tomorrow.
My name is Dennis Müller.
I'm a postdoc.
Yeah.
This is just weird.
How's the post-cord?
Yeah.
Okay, obviously these tools are not intended for long-haired people.
Yeah, so you'll have to make do with me.
My name is Dennis Müller.
I'm a postdoc in Professor Kohlhase's group.
You haven't met me before, but I'm largely responsible for all of the back-end infrastructure
for our nice course portal.
So if any one of you ever filed a bug report or anything, there's a pretty high chance
that it ended up in my email inbox.
So thanks for that.
Okay.
We're talking about machine learning stuff, obviously.
In particular, we're working towards trying to get at a general theoretical framework to
talk about learning problems in the first place.
You've seen at least one example.
Aha.
Here we go.
At least one example.
No.
There we go.
At least one example of an actual machine learning algorithm, namely decision trees.
I think you did that.
We've also seen some applications of statistical significance tests, which will be somewhat
more relevant in the future.
And we are now in the process of trying to find a theory that allows us to evaluate
and choose the right hypotheses.
We know what an inductive learning problem is, i.e. a pair consisting of a set of hypotheses
and a function that we try to learn.
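The pair just described can be sketched in code. This is a minimal illustration, assuming a finite hypothesis class and a toy target function (the names `target`, `hypotheses`, and `empirical_error` are all illustrative, not part of the lecture's formal setup):

```python
# An inductive learning problem as a pair: a set of hypotheses H and a
# target function f that we try to learn from examples generated by f.

def target(x):
    """The unknown target function f (here: parity of x)."""
    return x % 2

# Hypothesis class H: a few candidate functions (illustrative choices)
hypotheses = {
    "always_zero": lambda x: 0,
    "always_one":  lambda x: 1,
    "parity":      lambda x: x % 2,
}

# Training examples: input/output pairs produced by the target function
examples = [(x, target(x)) for x in range(10)]

def empirical_error(h, data):
    """Fraction of examples the hypothesis h misclassifies."""
    return sum(h(x) != y for x, y in data) / len(data)

# Learning here means picking the hypothesis with the lowest error
best = min(hypotheses, key=lambda name: empirical_error(hypotheses[name], examples))
print(best)  # "parity" fits all examples
```

The point of the sketch is only that "learning" reduces to choosing among the candidates in the hypothesis set based on the examples.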
We make a couple of assumptions up front, namely that the data we're trying to learn
from consists of a sequence of independent and identically distributed input/output
pairs, or examples, i.i.d. for short, which basically just means that each example's
outcome does not depend on the previous outcomes, and they're all drawn from the same
distribution.
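The i.i.d. assumption can be made concrete with a small sketch: every example is an independent draw from one fixed distribution. The particular distribution and labeling rule below are assumptions made for illustration only:

```python
import random

# i.i.d. examples: each draw comes from the same fixed distribution,
# and no draw depends on the previous ones.

rng = random.Random(0)  # seeded for reproducibility

def draw_example():
    """One independent draw from the same fixed input distribution."""
    x = rng.random()        # input, uniform on [0, 1)
    y = int(x > 0.7)        # label from a fixed (noise-free) rule
    return (x, y)

# A sequence of i.i.d. examples: same distribution, independent draws
sample = [draw_example() for _ in range(5)]
```

Because the distribution never changes between draws and each call ignores the earlier results, the sequence satisfies both halves of "independent and identically distributed".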
We've talked about error rates and cross-validation, i.e. we just basically count, for a given
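The counting idea behind error rates, and how cross-validation averages it over held-out folds, can be sketched as follows. This is a simplified illustration: a full cross-validation would re-train on the remaining folds, while here a fixed hypothesis is just evaluated per fold, and the data is a toy placeholder:

```python
# Error rate: count misclassified examples, divide by the total.
# Cross-validation: average that rate over k held-out folds.

def error_rate(h, data):
    """Fraction of examples that hypothesis h gets wrong."""
    return sum(h(x) != y for x, y in data) / len(data)

def k_fold_error(h, data, k=5):
    """Average error rate of a fixed hypothesis over k disjoint folds."""
    folds = [data[i::k] for i in range(k)]
    return sum(error_rate(h, fold) for fold in folds) / k

data = [(x, x % 2) for x in range(20)]   # toy labeled examples
h = lambda x: x % 2                      # a hypothesis that fits exactly

print(error_rate(h, data))   # 0.0
print(k_fold_error(h, data)) # 0.0
```

A hypothesis that ignores the input, such as `lambda x: 0`, would misclassify half of these examples, giving an error rate of 0.5.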
Presenters
Accessible via: Open Access
Duration: 01:21:07 min
Recording date: 2023-06-06
Uploaded: 2023-06-07 16:19:32
Language: en-US